The Attraction Indian Buffet Distribution
Authors
Abstract
We propose the attraction Indian buffet distribution (AIBD), a distribution for binary feature matrices influenced by pairwise similarity information. Binary feature matrices are used in Bayesian models to uncover latent variables (i.e., features) that explain observed data. The Indian buffet process (IBP) is a popular exchangeable prior for binary feature matrices. In the presence of additional information, however, the exchangeability assumption is not reasonable or desirable. The AIBD can incorporate pairwise similarity information, yet it preserves many properties of the IBP, including the distribution of the total number of features. Thus, much of the interpretation and intuition that one has for the IBP directly carries over to the AIBD. A temperature parameter controls the degree to which the similarity information affects feature-sharing between observations. Unlike other nonexchangeable distributions for feature allocations, the AIBD's probability mass function has a tractable normalizing constant, making posterior inference on hyperparameters straightforward using standard MCMC methods. A novel posterior sampling algorithm is proposed; we demonstrate the feasibility of the AIBD as a feature allocation prior and compare its performance with competing methods in simulations and an application.
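To make the idea concrete, here is a minimal generative sketch in Python/NumPy. It is an illustration of the construction described above, not the paper's exact definition: the IBP's dish-selection probability m_k / i is replaced by a similarity-weighted analogue, an exp(-temperature x distance) attraction reweights previous customers, and new features are drawn Poisson(alpha / i) exactly as in the IBP so the total number of features behaves as under the IBP. The function name sample_aibd_sketch and the specific weighting are assumptions made for illustration.

import numpy as np

def sample_aibd_sketch(distances, alpha, temperature, rng=None):
    # Illustrative sketch only: previous customers are reweighted by an
    # exponential-decay attraction, so nearby observations are more likely
    # to share features; temperature = 0 recovers equal weights (IBP-like).
    rng = np.random.default_rng(rng)
    n = distances.shape[0]
    attraction = np.exp(-temperature * distances)
    Z = np.zeros((n, 0), dtype=int)              # binary feature matrix (customers x dishes)
    for i in range(n):                           # customer i arrives; i customers came before
        if i > 0 and Z.shape[1] > 0:
            a = attraction[i, :i]
            weights = i * a / a.sum()            # normalized so the weights sum to i
            pseudo_counts = weights @ Z[:i, :]   # similarity-weighted analogue of m_k
            p_take = pseudo_counts / (i + 1)     # equals m_k / (i + 1) when weights are equal
            Z[i, :] = rng.random(Z.shape[1]) < p_take
        n_new = rng.poisson(alpha / (i + 1))     # brand-new dishes, exactly as in the IBP
        if n_new > 0:
            new_cols = np.zeros((n, n_new), dtype=int)
            new_cols[i, :] = 1
            Z = np.hstack([Z, new_cols])
    return Z

# Example: five observations on a line; with a large temperature, the two
# distant points tend not to share features with the tight cluster.
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
D = np.abs(x[:, None] - x[None, :])
Z = sample_aibd_sketch(D, alpha=2.0, temperature=1.0, rng=1)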
Similar resources
Dependent Indian Buffet Processes
Latent variable models represent hidden structure in observational data. To account for the distribution of the observational data changing over time, space or some other covariate, we need generalizations of latent variable models that explicitly capture this dependency on the covariate. A variety of such generalizations has been proposed for latent variable models based on the Dirichlet proce...
Restricted Indian buffet processes
Latent feature models are a powerful tool for modeling data with globally-shared features. Nonparametric exchangeable models such as the Indian Buffet Process offer modeling flexibility by letting the number of latent features be unbounded. However, current models impose implicit distributions over the number of latent features per data point, and these implicit distributions may not match our ...
Bayesian Statistics: Indian Buffet Process
A common goal of unsupervised learning is to discover the latent variables responsible for generating the observed properties of a set of objects. For example, factor analysis attempts to find a set of latent variables (or factors) that explain the correlations among the observed variables. A problem with factor analysis, however, is that the user has to specify the number of latent variables w...
Variational Inference for the Indian Buffet Process
The Indian Buffet Process (IBP) is a nonparametric prior for latent feature models in which observations are influenced by a combination of hidden features. For example, images may be composed of several objects and sounds may consist of several notes. Latent feature models seek to infer these unobserved features from a set of observations; the IBP provides a principled prior in situations wher...
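As background for this and the other IBP-based entries, the IBP's standard sequential ("restaurant") construction can be written in a few lines: customer i takes each previously seen dish k with probability m_k / i, where m_k is the number of earlier customers who took it, and then adds Poisson(alpha / i) new dishes. This is the textbook scheme, not code from any of the cited papers.

import numpy as np

def sample_ibp(n, alpha, rng=None):
    # Standard IBP restaurant construction: customer i takes existing dish k
    # with probability m_k / i, then tries Poisson(alpha / i) new dishes.
    rng = np.random.default_rng(rng)
    Z = np.zeros((n, 0), dtype=int)
    for i in range(1, n + 1):
        if Z.shape[1] > 0:
            m = Z[: i - 1, :].sum(axis=0)        # earlier customers per dish
            Z[i - 1, :] = rng.random(Z.shape[1]) < m / i
        n_new = rng.poisson(alpha / i)
        if n_new > 0:
            new_cols = np.zeros((n, n_new), dtype=int)
            new_cols[i - 1, :] = 1
            Z = np.hstack([Z, new_cols])
    return Z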
Indian Buffet Process Dictionary Learning: algorithms
Ill-posed inverse problems call for some prior model to define a suitable set of solutions. A wide family of approaches relies on the use of sparse representations. Dictionary learning precisely permits to learn a redundant set of atoms to represent the data in a sparse manner. Various approaches have been proposed, mostly based on optimization methods. We propose a Bayesian non parametric appr...
Journal
Journal title: Bayesian Analysis
Year: 2022
ISSN: 1936-0975, 1931-6690
DOI: https://doi.org/10.1214/21-ba1279